

How to tap AI’s potential while avoiding its pitfalls in the workplace

By Sara Brown

Since the Industrial Revolution, many of the biggest advances in how people work — from the org chart to the assembly line to the two-week development sprint — have emphasized getting the most from human capabilities.

Artificial intelligence affects the workplace differently, according to Ethan Mollick, SM ’04, PhD ’10, and that difference presents a new challenge for leaders.

“When there’s only human intellect in the room, we need to have human monitoring, human coaching, and human performance,” said Mollick, an associate professor of management at the Wharton School and author of a new book, “Co-Intelligence: Living and Working With AI.” “Now we have access to a different form of intelligence. How do we build a system of control around it that takes advantage of what it does well but avoids the disasters that might result if it does it badly?”

Speaking at the recent Work/24 event hosted by MIT Sloan Management Review, Mollick discussed what organizations should expect from their AI models, how companies can effectively and practically experiment with AI, and which concerns about AI warrant the most attention.

Account for AI errors

Using AI wisely requires an understanding of how it differs from conventional software. AI models receive inputs and generate outputs like most applications, but they behave more like humans because they’re trained on human writing, Mollick said.

This means organizations need to reset their expectations about reliability. AI models will make errors. The question is whether they’re better or worse than human errors, Mollick said.

“We obviously shouldn’t be using it to run a nuclear arsenal at this point, and I would not be taking it as your only source of medical advice,” he said. However, research has shown that GPT-4 can help diagnose complex medical cases, so it may offer a valuable second opinion. “How we work all those things together is an open question,” Mollick said.

He suggested using the Best Available Human standard: Is the AI more reliable than the most reliable human available at that moment? If so, then the AI model is worth using — but its output should still be checked, especially given the importance of the decisions that many workers make.

“As soon as the AI model is good enough, everyone tends to fall asleep at the wheel. They stop paying attention to what the AI can do and can’t do, and they don’t check the results,” Mollick said. “Just like we build processes around human issues, errors, and results, we’re going to have to do the same thing with AI.”

Demonstrate successful, low-risk use

As organizations build processes for working with AI, leaders need to set careful parameters for how it will be used, Mollick said. There should be a clear distinction between using AI for financial, legal, or health care tasks that must meet compliance requirements and using it for less risky applications, such as creative inspiration.

“What I’m seeing from a lot of organizations is very vague guidelines that absolutely discourage anyone from using AI,” Mollick said. “You want to model successful use, rather than shutting things down. People think of it in terms of policymaking but not in terms of culture or change.”

Experienced IT leaders know that all-out bans only lead to shadow IT, he said. It happened with smartphones and tablets, and now it’s happening with AI. Mollick recalled an ironic conversation with one organization whose policy banning the use of ChatGPT had itself been written with ChatGPT.

One recommendation for getting over the hump is to think about AI use in terms of accountability. Workers are held responsible for errors but not necessarily for bad ideas. If the risk associated with AI use is minimal, then it shouldn’t be discouraged.

Another tip: Ask executives to use AI so they understand what it’s good and bad at, learn how to write effective prompts, and can model responsible behavior from the top down. “Just have people use these systems themselves. This is the ultimate user innovation,” Mollick said.

Worry about the right things

Given AI’s potential, leaders are likely wondering what the technology will mean for the future of their organizations and the workers they manage. Mollick said some common concerns are overblown, while others are worthy of more attention.

“It’s a little weird to me that people are so worried about privacy,” he said. This worry is particularly misplaced given how many enterprises use cloud-based email and file storage services. Organizations should be fine if their employees know to avoid free chatbots that use data inputs to train models, he said.


The need for proprietary models is also overrated. “Bigger models are generally better,” Mollick said, because they’ve been trained on larger data sets. He noted that Bloomberg’s $10 million custom stock-trading large language model was outperformed by GPT-4, which had no additional customization.

On the other hand, Mollick noted that two areas of concern require significant discussion. One is how AI models can replicate human biases — for example, when the technology is used to make personnel decisions such as highlighting key qualities in letters of recommendation or making a job offer. Here, organizations will need to keep a human in the loop. (This also falls into the category of tasks that must meet legal requirements.)

The other is AI’s impact on entry-level work. Mollick said he worries that automation will “break” the apprenticeship model as AI continues to boost productivity and outperform humans on the low-level tasks that workers complete for hands-on training experience.

“In most cases, I would argue that AI is already better than most of your interns,” he said. “Are you still going to train people in the same way when the deal is broken? The deal is, I help train you, you do work. If I get the work done another way, am I still training people?”

Watch all the videos from the Work/24 symposium
